
    Local variation of hashtag spike trains and popularity in Twitter

    We draw a parallel between hashtag time series and neuron spike trains. In both cases, the process exhibits complex dynamic patterns, including temporal correlations, burstiness, and other types of nonstationarity. We propose adopting the so-called local variation in order to uncover salient dynamics while properly detrending the time-dependent features of a signal. The methodology is tested on both real and randomized hashtag spike trains, and shows that popular hashtags present more regular, and hence less bursty, behavior, suggesting the measure's potential use for predicting online popularity in social media. (Comment: 7 pages, 7 figures)
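
    The local variation compares only consecutive inter-event intervals, which is what makes it insensitive to slow rate changes and suitable for the detrended burstiness analysis described above. Below is a minimal Python sketch using the standard Lv formula from the spike-train literature; the synthetic data and variable names are illustrative assumptions, not the authors' code.

```python
# Local variation Lv of a point process (Shinomoto et al.):
#   Lv = 3/(n-1) * sum_i ((T_i - T_{i+1}) / (T_i + T_{i+1}))^2
# over consecutive inter-event intervals T_i. Lv ~ 1 for a Poisson
# process, Lv ~ 0 for a regular (clock-like) train.
import numpy as np

def local_variation(event_times):
    isi = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    if isi.size < 2:
        raise ValueError("need at least three events")
    num = (isi[:-1] - isi[1:]) ** 2
    den = (isi[:-1] + isi[1:]) ** 2
    return 3.0 * np.mean(num / den)

rng = np.random.default_rng(0)
poisson_train = np.cumsum(rng.exponential(1.0, 1000))        # irregular: Lv ~ 1
regular_train = np.arange(1000) + rng.normal(0, 0.05, 1000)  # regular: Lv ~ 0
print(local_variation(poisson_train), local_variation(regular_train))
```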

    A Markovian event-based framework for stochastic spiking neural networks

    In spiking neural networks, information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input it receives, and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce the next spike time from a spike train, and therefore to describe the network activity based on spike times alone, regardless of the membrane potential process. To address this question rigorously, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e., one based on the computation of the spike times. We show that the firing times of the neurons in the network constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike intervals of the neurons in the network. Where the Markovian model can be developed, the transition probability is derived explicitly for classical neural network models such as linear integrate-and-fire neurons with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of deterministic spiking neural networks.
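
    As a rough illustration of why the spike-time sequence can carry a Markov structure, the sketch below simulates a single noisy leaky integrate-and-fire neuron: because the membrane potential is reset after every spike, the next spike time of an isolated neuron depends only on the last one. This is a toy Euler-Maruyama simulation with assumed parameters and noise scaling, not the paper's event-based algorithm.

```python
# Noisy leaky integrate-and-fire neuron, simulated by Euler-Maruyama.
# The reset after each spike erases the membrane history, so successive
# interspike intervals of this isolated neuron are independent draws.
import numpy as np

rng = np.random.default_rng(1)
tau, v_rest, v_thresh, v_reset = 20.0, 0.0, 1.0, 0.0  # ms, dimensionless V
mu, sigma, dt = 1.2, 0.3, 0.01                        # drive, noise, step (ms)

v, t, spike_times = v_reset, 0.0, []
while t < 1000.0:
    v += (-(v - v_rest) + mu) * dt / tau + sigma * np.sqrt(dt / tau) * rng.normal()
    t += dt
    if v >= v_thresh:
        spike_times.append(t)
        v = v_reset  # reset: next ISI is independent of the past

isi = np.diff(spike_times)
print(f"{len(spike_times)} spikes, mean ISI {isi.mean():.2f} ms, CV {isi.std()/isi.mean():.2f}")
```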

    Finite-size and correlation-induced effects in Mean-field Dynamics

    Full text link
    The brain's activity is characterized by the interaction of a very large number of neurons that are strongly affected by noise. However, signals often arise at macroscopic scales, integrating the effect of many neurons into a reliable pattern of activity. In order to study such large neuronal assemblies, one is often led to derive mean-field limits that summarize the effect of the interaction of a large number of neurons into an effective signal. Classical mean-field approaches consider the evolution of a deterministic variable, the mean activity, thus neglecting the stochastic nature of neural behavior. In this article, we build upon two recent approaches that include correlations and higher-order moments in mean-field equations, and study how these stochastic effects influence the solutions of the mean-field equations, both in the limit of an infinite number of neurons and for large yet finite networks. We introduce a new model, the infinite model, which arises from both approaches by a rescaling of the variables; this rescaling is invertible for finite-size networks, so the model provides equations equivalent to the previously derived ones. The study of this model allows us to understand the qualitative behavior of such large-scale networks. We show that, although the solutions of the deterministic mean-field equation constitute uncorrelated solutions of the new mean-field equations, the stability properties of limit cycles are modified by the presence of correlations, and additional non-trivial behaviors, including periodic orbits, appear where the mean field had none. The origin of these behaviors is then explored in finite-size networks, where interesting mesoscopic-scale effects appear. This study shows that the infinite-size system is a singular limit of the network equations: for any finite network, the system will differ from the infinite one.
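
    To make the finite-size effect concrete, here is an illustrative sketch (assumed toy dynamics, not the paper's model) comparing a deterministic mean-field rate equation with the empirical mean of a finite noisy network; the gap between the two reflects finite-size fluctuations and shrinks as N grows.

```python
# Toy comparison: deterministic mean-field dm/dt = -m + tanh(J*m)
# versus the empirical mean of N noisy rate units driven by their
# own population mean. All dynamics and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(2)
N, J, sigma, dt, steps = 200, 1.5, 0.5, 0.01, 5000

x = rng.normal(0, 0.1, N)   # finite network state
m_mf = 0.1                  # deterministic mean-field variable
gap = []
for _ in range(steps):
    m_net = x.mean()
    x += (-x + np.tanh(J * m_net)) * dt + sigma * np.sqrt(dt) * rng.normal(size=N)
    m_mf += (-m_mf + np.tanh(J * m_mf)) * dt
    gap.append(abs(m_net - m_mf))

print(f"mean |network - mean-field| = {np.mean(gap):.3f}")  # shrinks with larger N
```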

    Spike Timing and Reliability in Cortical Pyramidal Neurons: Effects of EPSC Kinetics, Input Synchronization and Background Noise on Spike Timing

    In vivo studies have shown that neurons in the neocortex can generate action potentials with high temporal precision. The mechanisms controlling the timing and reliability of action potential generation in neocortical neurons, however, are still poorly understood. Here we investigated the temporal precision and reliability of spike firing in cortical layer V pyramidal cells at near-threshold membrane potentials. Timing and reliability of spike responses were a function of EPSC kinetics, the temporal jitter of population excitatory inputs, and background synaptic noise. We used somatic current injection to mimic population synaptic input events and measured spike probability and spike time precision (STP), the latter defined as the time window (Δt) holding 80% of response spikes. EPSC rise and decay times were varied over the known physiological spectrum. At spike threshold, EPSC decay time had a stronger influence on STP than rise time. Generally, STP was highest (≤2.45 ms) in response to synchronous compounds of EPSCs with fast rise and decay kinetics. Compounds with slow EPSC kinetics (decay time constants > 6 ms) triggered spikes at lower temporal precision (≥6.58 ms). We found an overall linear relationship between STP and spike delay. The difference in STP between fast and slow compound EPSCs could be reduced by incrementing the amplitude of slow compound EPSCs. The introduction of temporal jitter to compound EPSCs had a comparatively small effect on STP, with a tenfold increase in jitter resulting in only a fivefold decrease in STP. In the presence of simulated synaptic background activity, precisely timed spikes could still be induced by fast EPSCs, but not by slow EPSCs.
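
    The STP measure as defined above, the narrowest time window holding 80% of response spikes, can be computed directly from spike times pooled across trials. The sliding-window implementation below is our assumption about one reasonable way to compute it, not the authors' analysis code.

```python
# Spike time precision (STP): smallest window containing a given
# fraction (here 80%) of spikes, found by scanning every span of
# k consecutive sorted spike times.
import numpy as np

def spike_time_precision(spike_times, fraction=0.8):
    t = np.sort(np.asarray(spike_times, dtype=float))
    k = int(np.ceil(fraction * t.size))   # spikes the window must contain
    return np.min(t[k - 1:] - t[:t.size - k + 1])

rng = np.random.default_rng(3)
tight = rng.normal(10.0, 0.5, 100)  # spikes jittered by ~0.5 ms around 10 ms
loose = rng.normal(10.0, 3.0, 100)  # broadly scattered spikes
print(spike_time_precision(tight), spike_time_precision(loose))
```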

    Neural Network Mechanisms Underlying Stimulus Driven Variability Reduction

    It is well established that the variability of neural activity across trials, as measured by the Fano factor, is elevated. This fact poses limits on information encoding by neural activity. However, a series of recent neurophysiological experiments have changed this traditional view. Single-cell recordings across a variety of species, brain areas, brain states, and stimulus conditions demonstrate a remarkable reduction of neural variability when an external stimulus is applied and when attention is allocated towards a stimulus within a neuron's receptive field, suggesting an enhancement of information encoding. Using a heterogeneously connected neural network model whose dynamics exhibits multiple attractors, we demonstrate here how this variability reduction can arise from a network effect. In the spontaneous state, we show that the high degree of neural variability is mainly due to fluctuation-driven excursions from attractor to attractor. This occurs when, in parameter space, the network's working point is near the bifurcation allowing multistable attractors. The application of an external excitatory drive, by stimulation or attention, stabilizes one specific attractor, eliminating the transitions between the different attractors and resulting in a net decrease in neural variability over trials. Importantly, non-responsive neurons also exhibit a reduction of variability. Finally, this reduced variability is found to arise from an increased regularity of the neural spike trains. In conclusion, these results suggest that variability reduction under stimulation and attention is a property of neural circuits.
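
    A small sketch of the Fano factor and of the attractor-hopping intuition described above: spike counts drawn from a rate that jumps between two "attractor" values across trials are super-Poisson (FF > 1), while counts from a single stabilized rate are near-Poisson (FF ≈ 1). The surrogate data are illustrative assumptions only, not the paper's network model.

```python
# Fano factor = variance / mean of spike counts across trials.
import numpy as np

def fano_factor(counts):
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(4)
# Spontaneous: the rate hops between attractors across trials -> FF > 1.
rates_spont = rng.choice([2.0, 10.0], size=200)
spont = rng.poisson(rates_spont)
# Stimulated: one attractor stabilized -> near-Poisson counts, FF ~ 1.
driven = rng.poisson(10.0, size=200)
print(fano_factor(spont), fano_factor(driven))
```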

    Regulation of Spike Timing-Dependent Plasticity of Olfactory Inputs in Mitral Cells in the Rat Olfactory Bulb

    The recent history of input activity onto granule cells (GCs) in the main olfactory bulb can affect the strength of lateral inhibition, which functions to generate contrast enhancement. However, at the level of plasticity, it is unknown whether and how prior modification of lateral inhibition modulates the subsequent induction of long-lasting changes at the excitatory olfactory nerve (ON) inputs to mitral cells (MCs). Here we found that repetitive stimulation of two distinct excitatory inputs to the GCs induced a persistent modification of lateral inhibition in MCs in opposing directions. This bidirectional modification of inhibitory inputs differentially regulated the subsequent synaptic plasticity of the excitatory ON inputs to the MCs, which was induced by repetitive pairing of excitatory postsynaptic potentials (EPSPs) with postsynaptic bursts. The regulation of spike timing-dependent plasticity (STDP) was achieved through regulation of the inter-spike interval (ISI) of the postsynaptic bursts. This novel form of inhibition-dependent regulation of plasticity may contribute to the encoding or processing of olfactory information in the olfactory bulb.

    Accurate path integration in continuous attractor network models of grid cells

    Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of ~10–100 meters and ~1–10 minutes. These findings form a proof of concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also yield pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.
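
    The diffusive accumulation of path-integration error can be illustrated with a toy calculation (our assumption, not the paper's spiking network): integrating a velocity signal corrupted by white noise yields a position error whose RMS grows as the square root of time, which is what bounds the distance and duration over which dead-reckoning stays accurate.

```python
# Toy path-integration error: position error from integrating a noisy
# velocity grows diffusively, RMS ~ sigma * sqrt(t).
import numpy as np

rng = np.random.default_rng(5)
dt, T, trials, sigma = 0.1, 600.0, 200, 0.05  # s, s, -, m/sqrt(s)
steps = round(T / dt)
v_true = 0.2                                  # assumed running speed (m/s)

err = np.zeros((trials, steps))
for k in range(trials):
    v_noisy = v_true + sigma / np.sqrt(dt) * rng.normal(size=steps)
    err[k] = np.cumsum((v_noisy - v_true) * dt)  # integrated position error

rms = np.sqrt((err ** 2).mean(axis=0))
for t in (60, 300, 600):
    print(f"RMS error after {t:3d} s: {rms[round(t / dt) - 1]:.2f} m")
# Errors of order a meter over minutes, growing as sqrt(t).
```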

    STDP Allows Fast Rate-Modulated Coding with Poisson-Like Spike Trains

    Spike timing-dependent plasticity (STDP) has been shown to enable single neurons to detect repeatedly presented spatiotemporal spike patterns. This holds even when such patterns are embedded in equally dense random spiking activity, that is, in the absence of external reference times such as a stimulus onset. Here we demonstrate, both analytically and numerically, that STDP can also learn repeating rate-modulated patterns, which have received more experimental evidence, for example through post-stimulus time histograms (PSTHs). Each input spike train is generated from a rate function using a stochastic sampling mechanism, chosen here to be an inhomogeneous Poisson process. Learning is feasible provided significant covarying rate modulations occur within the typical timescale of STDP (∼10–20 ms) for sufficiently many inputs (∼100 among 1000 in our simulations), a condition met by many experimental PSTHs. Repeated pattern presentations induce spike-time correlations that are captured by STDP. Despite imprecise input spike times and even variable spike counts, a single trained neuron robustly detects the pattern within just a few milliseconds of its presentation. Temporal imprecision and Poisson-like firing variability are therefore not an obstacle to fast temporal coding. STDP provides an appealing mechanism for learning such rate patterns, which, beyond sensory processing, may also be involved in many cognitive tasks.
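
    The input-generation step described above, drawing spike trains from a rate function via an inhomogeneous Poisson process, can be sketched with the standard thinning algorithm of Lewis and Shedler. The Gaussian rate bump on a ~10 ms timescale is an illustrative assumption matching the timescale the abstract says STDP requires.

```python
# Inhomogeneous Poisson spike train by thinning: draw candidate events
# at the peak rate, keep each with probability rate(t) / rate_max.
import numpy as np

def inhomogeneous_poisson(rate_fn, t_max, rate_max, rng):
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_max:
            return np.array(spikes)
        if rng.random() < rate_fn(t) / rate_max:
            spikes.append(t)

rng = np.random.default_rng(6)
# 20 Hz baseline plus a Gaussian bump at 50 ms with 10 ms width (times in s):
rate = lambda t: 20.0 + 30.0 * np.exp(-0.5 * ((t - 0.05) / 0.01) ** 2)
train = inhomogeneous_poisson(rate, t_max=0.2, rate_max=50.0, rng=rng)
print(len(train), "spikes at", np.round(train, 4))
```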

    Balancing Feed-Forward Excitation and Inhibition via Hebbian Inhibitory Synaptic Plasticity

    It has been suggested that excitatory and inhibitory inputs to cortical cells are balanced, and that this balance is important for the highly irregular firing observed in the cortex. There are two hypotheses as to the origin of this balance. The first assumes that it results from a stable solution of the recurrent neuronal dynamics; this model can account for a balance of steady-state excitation and inhibition without fine-tuning of parameters, but not for transient inputs. The second hypothesis suggests that the feed-forward excitatory and inhibitory inputs to a postsynaptic cell are already balanced, and thus does account for the balance of transient inputs. However, it remains unclear what mechanism underlies the fine-tuning required for balancing feed-forward excitatory and inhibitory inputs. Here we investigated whether inhibitory synaptic plasticity is responsible for the balance of transient feed-forward excitation and inhibition. We address this issue in the framework of a model characterizing the stochastic dynamics of temporally anti-symmetric Hebbian spike-timing-dependent plasticity of feed-forward excitatory and inhibitory synaptic inputs to a single postsynaptic cell. Our analysis shows that inhibitory Hebbian plasticity generates ‘negative feedback’ that balances excitation and inhibition, in contrast to the ‘positive feedback’ of excitatory Hebbian synaptic plasticity. As a result, this balance may increase the sensitivity of the learning dynamics to the correlation structure of the excitatory inputs.
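
    A minimal rate-based sketch of the ‘negative feedback’ intuition (our simplification, not the paper's spike-timing model): a Hebbian rule that strengthens the inhibitory weight while the postsynaptic cell is active drives the cell toward a balanced, low-rate fixed point, whereas the same rule applied to an excitatory weight would only amplify activity.

```python
# Hebbian inhibitory plasticity as negative feedback: the inhibitory
# weight grows whenever pre and post are co-active above a target rate,
# pulling the feed-forward E - I balance toward that target.
g_exc, w_inh, eta, r_target = 2.0, 0.1, 0.05, 0.2  # all values assumed
r_pre = 1.0                                        # presynaptic rate

for _ in range(200):
    r_post = max((g_exc - w_inh) * r_pre, 0.0)     # feed-forward E - I drive
    w_inh += eta * r_pre * (r_post - r_target)     # Hebbian, with target rate
    w_inh = max(w_inh, 0.0)

print(f"final inhibitory weight {w_inh:.2f}, post rate {r_post:.2f}")
# Converges to w_inh ~ 1.8, i.e. r_post ~ r_target: excitation balanced.
```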